Few-shot Prompting
Learn about the few-shot prompting technique and its limitations.
Overview
Few-shot prompting is a technique in which a model is shown a small number of examples of a task, also called shots, typically in the range of 1–10, directly within the prompt. It is closely related to few-shot learning, in which a model is fine-tuned on a small dataset of examples, often referred to as a support set, to learn the underlying patterns and rules of the task, and is then evaluated on a separate dataset, called the query set. Unlike fine-tuning, few-shot prompting requires no weight updates: the examples in the prompt alone guide the model's behavior. Few-shot prompting is helpful when the data available for a task is limited or costly to obtain.
Few-shot prompting facilitates contextual learning by presenting examples within the prompt to guide the model toward improved performance. These examples act as conditioning for subsequent instances where we want the model to produce the desired response.
One-shot prompts
A one-shot prompt is a type of few-shot prompt that contains only a single example along with the input data. Some models can grasp a task completely from just one example.
Prompt:
2 + 2 = 4
4 + 4 =

Output:
8
Here, the model infers the addition pattern from the single worked example and completes the second equation accordingly.
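The pattern above can be assembled programmatically. Below is a minimal sketch (the function name is our own) that joins one worked example with a new query into a single one-shot prompt string, ready to be sent to a model:

```python
def build_one_shot_prompt(example: str, query: str) -> str:
    """Combine one worked example with a new query into a one-shot prompt.

    The example conditions the model to continue the same pattern
    for the query that follows it.
    """
    return f"{example}\n{query}"

prompt = build_one_shot_prompt("2 + 2 = 4", "4 + 4 =")
print(prompt)
```

The same helper extends naturally to more shots by joining additional examples before the query.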
Examples
Let's look at some examples of few-shot prompting in the context of different prompt types.
General instructions
Few-shot prompting is useful because it gives us more control over the model. Using this technique, we can make the model accomplish tasks that would be difficult to specify through instructions alone. For example, let's see how we can generate sentences for fake words.
Prompt:
Bearwarts are rashes that appear on the skin when someone makes contact with a poisonous plant. An example of a sentence that uses the word bearwarts is: After playing with plants in the forest, Sara noticed she got bearwarts on her arms.
Caroyanker is a sculpting tool used to make ice sculptures. An example of a sentence that uses the word caroyanker is:

Output:
The sculptor used a caroyanker to shape the intricate details of the ice sculpture.
In this example, we use fake words with no real meaning and instruct the model to write a sentence containing each fake word. We give an example sentence for the first word, “bearwarts,” and then leave it up to the model to create a sentence for the second, “caroyanker.”
Classification
Let's say we have a text classification task where we want to classify customer reviews into positive or negative sentiments. With few-shot prompting, we can provide a few examples of labeled positive and negative reviews, followed by the input that needs to be labeled.
Prompt:
This product is amazing! It exceeded my expectations. // positive
I love this restaurant. The food is delicious and the service is excellent. // positive
The customer support was terrible. They were unresponsive and rude. // negative
I'm very disappointed with the quality of the food. //

Output:
negative
Generally, for classification prompts, the model responds with a full statement like, “This review is negative.” Here, however, the response mirrors the format of the examples in the prompt: the model simply outputs “negative.” We have primed the model to respond in a certain way using few-shot prompting.
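The labeled-example format above can be generated from data rather than written by hand. This is a minimal sketch (function and variable names are our own) that builds a classification prompt from a list of (review, label) pairs plus one unlabeled review, using the same `// label` convention:

```python
def build_classification_prompt(examples, new_input):
    """Build a few-shot classification prompt.

    Each labeled example is rendered as `text // label`; the unlabeled
    input ends with a bare `//` so the model completes the missing label.
    """
    lines = [f"{text} // {label}" for text, label in examples]
    lines.append(f"{new_input} //")
    return "\n".join(lines)

examples = [
    ("This product is amazing! It exceeded my expectations.", "positive"),
    ("The customer support was terrible. They were unresponsive and rude.", "negative"),
]
prompt = build_classification_prompt(
    examples, "I'm very disappointed with the quality of the food."
)
print(prompt)
```

Because the examples end each line with a single-word label, the model is primed to answer with a single word as well.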
Information extraction
The few-shot prompting technique can also be used for information extraction. Because this technique gives us more freedom, we can guide the model to respond in a desired format, so the output can be used where required without further processing, saving time and resources.
Prompt:
Text 1: In the vibrant city of Paris, Emma, James, and Olivia strolled along the romantic streets adorned with charming cafes and stunning architecture. The Eiffel Tower, standing tall in the distance, cast a golden hue under the warm sunset. Emma's vibrant red dress matched the blossoming roses that adorned the quaint Montmartre neighborhood, while James sported a dashing navy blue suit that complemented the cobblestone streets. Olivia, with her radiant smile, carried a bouquet of purple and white flowers, adding a touch of elegance to their picturesque Parisian adventure.
Output 1:
{
  "names": ["Emma", "James", "Olivia"],
  "places": ["Paris", "Eiffel Tower", "Montmartre"],
  "colors": ["golden", "red", "navy blue", "purple", "white"]
}
Text 2: In the bustling city of Tokyo, Sakura, a young artist, found inspiration in the vibrant surroundings. She wandered through the winding streets of Shibuya, where the neon lights illuminated the night in a dazzling display of colors. As she made her way to the peaceful gardens of Shinjuku Gyoen, Sakura marveled at the delicate cherry blossoms, their soft pink petals creating a breathtaking canopy overhead. Later, she sought solace in the tranquil beauty of Mount Fuji, its majestic peak towering above the serene landscape, adorned in shades of white and blue. With her sketchbook in hand, Sakura captured the essence of these remarkable places, blending the vibrant hues of the city with the serene colors of nature.
Output 2:
{
  "names": ["Sakura"],
  "places": ["Tokyo", "Shibuya", "Shinjuku Gyoen", "Mount Fuji"],
  "colors": ["pink", "white", "blue"]
}
Text 3: As the sun dipped below the horizon, casting a warm orange glow over the serene coastal town of Santorini, Maria and Alex embarked on a romantic stroll along the cobblestone pathways. They admired the breathtaking view of the azure Aegean Sea and the whitewashed buildings of Oia, their vibrant blue domes standing out against the backdrop. Maria's flowing yellow sundress mirrored the vibrant sunflowers that adorned the picturesque gardens, while Alex's crisp white shirt contrasted with the golden hues of the surrounding cliffs. Hand in hand, they made their way to a charming seaside taverna, where they enjoyed a delicious meal amidst the colorful bougainvillea vines cascading in shades of magenta, purple, and pink.
Output 3:
Output:
{
  "names": ["Maria", "Alex"],
  "places": ["Santorini", "Aegean Sea", "Oia"],
  "colors": ["orange", "azure", "blue", "yellow", "white", "golden", "magenta", "purple", "pink"]
}
In this example, we use few-shot prompting to control the output format. In the first two example texts, we extract the names, places, and colors in the passage and display them in JSON format. The model successfully extracts and displays the required information for the third text passage accordingly.
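Because the examples pin the output to valid JSON, the model's response can be consumed directly by downstream code. A minimal sketch, where the `response` string stands in for an actual model reply:

```python
import json

# Stand-in for the model's reply to the extraction prompt above.
response = (
    '{"names": ["Maria", "Alex"], '
    '"places": ["Santorini", "Aegean Sea", "Oia"], '
    '"colors": ["orange", "azure", "blue"]}'
)

# Parsing fails loudly if the model drifts from the JSON format,
# which makes format violations easy to detect and retry.
data = json.loads(response)
print(data["names"])
```

This is the practical payoff of controlling the output format: no regexes or manual cleanup, just a parse step.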
Limitations
While few-shot prompting is a powerful technique, it can have limitations that may lead to inaccurate responses for some use cases. Here are some reasons why few-shot prompting can be ineffective:
- Generalization: Although few-shot prompting enables learning from limited examples, the model’s generalization ability may suffer. The model can struggle with novel or out-of-distribution inputs because it relies heavily on the few examples provided. It can also pick up bias if the examples are unrepresentative or contain inherent biases.
- Task complexity: Few-shot prompting works better for relatively simpler tasks where a few examples are sufficient to capture the underlying patterns. For complex tasks requiring deep understanding, like reasoning and problem-solving, more training examples or a different approach might be necessary.
- Limited exploration: Due to the limited examples, few-shot prompts may restrict the model’s ability to explore a wider range of possibilities and variations in the generated outputs. This can result in a lack of variety and creativity in the responses.
- Lack of contextual understanding: Few-shot prompting might struggle with understanding the context and nuances of the task or the specific examples, resulting in less accurate or contextually inappropriate responses.
It's important to carefully consider these limitations and assess whether few-shot prompting suits the specific task at hand.